
    On the Approximation Quality of Markov State Models

    We consider a continuous-time Markov process on a large continuous or discrete state space. The process is assumed to have strong enough ergodicity properties and to exhibit a number of metastable sets. Markov state models (MSMs) are designed to represent the effective dynamics of such a process by a Markov chain that jumps between the metastable sets with the transition rates of the original process. MSMs have been used for a number of applications, including molecular dynamics, for more than a decade. Their approximation quality, however, has not yet been fully understood. In particular, it would be desirable to have a sharp error bound for the difference in propagation of probability densities between the MSM and the original process on long timescales. Here, we provide such a bound for a rather general class of Markov processes ranging from diffusions in energy landscapes to Markov jump processes on large discrete spaces. Furthermore, we discuss how this result provides formal support for, or shows the limitations of, algorithmic strategies that have been found to be useful for the construction of MSMs. Our findings are illustrated by numerical experiments.
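    The density-propagation error that the bound controls can be made concrete with a small toy model. The sketch below (our own minimal Python/NumPy illustration, not the construction analysed in the paper) coarse-grains an assumed 4-state chain with two metastable sets into a 2-state MSM and records the worst-case difference in coarse probabilities over long times.

```python
import numpy as np

# Toy 4-state chain with two metastable sets A = {0, 1}, B = {2, 3}:
# fast mixing inside each set, rare transitions between them.
# (Illustrative only; P is symmetric, so the stationary distribution
# pi is uniform.)
P = np.array([
    [0.88, 0.10, 0.02, 0.00],
    [0.10, 0.90, 0.00, 0.00],
    [0.02, 0.00, 0.88, 0.10],
    [0.00, 0.00, 0.10, 0.90],
])
pi = np.full(4, 0.25)
sets = [[0, 1], [2, 3]]

# MSM transition matrix: pi-weighted probability of jumping from one
# metastable set to another in one step of the original chain.
P_msm = np.zeros((2, 2))
for a, A in enumerate(sets):
    w = pi[A] / pi[A].sum()
    for b, B in enumerate(sets):
        P_msm[a, b] = w @ P[np.ix_(A, B)].sum(axis=1)

# Propagate a density with the full chain and with the MSM, and record
# the worst-case difference of the coarse (per-set) probabilities.
coarse = lambda r: np.array([r[A].sum() for A in sets])
rho0 = np.array([1.0, 0.0, 0.0, 0.0])  # all mass on state 0, inside set A
err = max(
    np.abs(coarse(rho0 @ np.linalg.matrix_power(P, k))
           - coarse(rho0) @ np.linalg.matrix_power(P_msm, k)).max()
    for k in range(1, 200)
)
print(P_msm)  # [[0.99, 0.01], [0.01, 0.99]]
print(err)
```

    The error is small but nonzero here: the initial density is not locally equilibrated within set A, which is exactly the kind of effect a sharp propagation bound has to account for.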

    Estimating the eigenvalue error of Markov State Models

    We consider a continuous-time, ergodic Markov process on a large continuous or discrete state space. The process is assumed to exhibit a number of metastable sets. Markov state models (MSMs) are designed to represent the effective dynamics of such a process by a Markov chain that jumps between the metastable sets with the transition rates of the original process. MSMs have been used for a number of applications, including molecular dynamics (cf. Noe et al, PNAS(106) 2009)[1], for more than a decade. A rigorous and fully general analysis of their approximation quality (with no zero-temperature limit or comparable restrictions), however, has only recently begun. Our first article on this topic (Sarich et al, MMS(8) 2010)[2] introduces an error bound for the difference in propagation of probability densities between the MSM and the original process on long timescales. Here we provide upper bounds for the error in the eigenvalues between the MSM and the original process; that is, we analyse how well the longest timescales of the original process are approximated by the MSM. Our findings are illustrated by numerical experiments.
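    The eigenvalue error can be illustrated numerically. The following is our own toy example in Python/NumPy (an assumed 4-state chain and its 2-state MSM, not the systems treated in the paper), comparing the slow eigenvalue and the implied timescale of the original chain with those of the MSM.

```python
import numpy as np

# Assumed toy model: 4-state chain with metastable sets {0, 1} and {2, 3}
# (symmetric, so the stationary distribution is uniform), together with
# the 2-state MSM obtained by averaging transition probabilities per set.
P = np.array([
    [0.88, 0.10, 0.02, 0.00],
    [0.10, 0.90, 0.00, 0.00],
    [0.02, 0.00, 0.88, 0.10],
    [0.00, 0.00, 0.10, 0.90],
])
P_msm = np.array([[0.99, 0.01],
                  [0.01, 0.99]])

def dominant_eigenvalues(T, k):
    """Return the k largest (real parts of) eigenvalues of T."""
    return np.sort(np.real(np.linalg.eigvals(T)))[::-1][:k]

lam_full = dominant_eigenvalues(P, 2)    # [1.0, lambda_2 of the full chain]
lam_msm = dominant_eigenvalues(P_msm, 2)

# Implied timescale of the slow transition, t = -1 / log(lambda_2):
# an eigenvalue error translates directly into a timescale error.
t_full = -1.0 / np.log(lam_full[1])
t_msm = -1.0 / np.log(lam_msm[1])
print(lam_full[1], lam_msm[1])  # the MSM underestimates lambda_2 ...
print(t_full, t_msm)            # ... and hence the slow timescale
```

    The MSM eigenvalue lies below the true one, consistent with the fact that a Galerkin-type projection cannot overestimate the dominant eigenvalues; the bounds in the paper quantify how large this gap can be.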

    On Markov State Models for Metastable Processes

    We consider Markov processes on large state spaces and want to find low-dimensional structure-preserving approximations of the process, in the sense that the longest timescales of the dynamics of the original process are reproduced well. Recent years have seen the advance of so-called Markov state models (MSMs) for processes on very large state spaces exhibiting metastable dynamics. It has been demonstrated that MSMs are especially useful for modelling the interesting slow dynamics of biomolecules (cf. Noe et al, PNAS(106) 2009) and materials. From the mathematical perspective, MSMs result from Galerkin projection of the transfer operator underlying the original process onto some low-dimensional subspace, which leads to an approximation of the dominant eigenvalues of the transfer operator and thus of the longest timescales of the original dynamics. Until now, most articles on MSMs have been based on full subdivisions of state space, i.e., Galerkin projections onto subspaces spanned by indicator functions. We show how to generalize MSMs to alternative low-dimensional subspaces with superior approximation properties, and how to analyse the approximation quality (dominant eigenvalues, propagation of functions) of the resulting MSMs. To this end, we give an overview of the construction of MSMs, the associated stochastic and functional-analytic background, and the algorithmic consequences. Furthermore, we illustrate the mathematical construction with numerical examples.
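    The Galerkin construction described above can be sketched in a few lines. The example below (our own Python/NumPy illustration on a hypothetical 4-state chain) projects the transfer operator onto the classical indicator-function basis, which recovers the usual MSM, and onto a richer subspace containing the exact slow eigenvector, which reproduces the slow eigenvalue exactly.

```python
import numpy as np

# Hypothetical 4-state chain with metastable sets {0, 1} and {2, 3};
# P is symmetric, so the stationary distribution pi is uniform.
P = np.array([
    [0.88, 0.10, 0.02, 0.00],
    [0.10, 0.90, 0.00, 0.00],
    [0.02, 0.00, 0.88, 0.10],
    [0.00, 0.00, 0.10, 0.90],
])
pi = np.full(4, 0.25)

def galerkin(P, pi, X):
    """pi-weighted Galerkin projection of the transfer operator onto
    span(X): solve S T = C with S = X^T D X, C = X^T D P X, D = diag(pi)."""
    D = np.diag(pi)
    return np.linalg.solve(X.T @ D @ X, X.T @ D @ P @ X)

# Full subdivision of state space: indicator functions of the two sets.
# This Galerkin projection is exactly the classical MSM.
X_ind = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
T_ind = galerkin(P, pi, X_ind)

# Alternative subspace: the constant function plus the exact slow
# eigenvector. Projecting onto an invariant subspace preserves the
# corresponding dominant eigenvalues exactly.
lam, V = np.linalg.eigh(P)             # valid here since P is symmetric
X_eig = np.column_stack([np.ones(4), V[:, -2]])
T_eig = galerkin(P, pi, X_eig)

lam2 = lam[-2]                         # exact slow eigenvalue of P
lam2_ind = np.sort(np.real(np.linalg.eigvals(T_ind)))[-2]
lam2_eig = np.sort(np.real(np.linalg.eigvals(T_eig)))[-2]
print(lam2, lam2_ind, lam2_eig)
```

    In practice the exact eigenvector is of course unavailable; the point of the paper is that subspaces which merely approximate the dominant eigenfunctions better than indicator functions already yield better MSMs.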

    Optimal Fuzzy Aggregation of Networks

    This paper is concerned with the problem of fuzzy aggregation of a network with non-negative weights on its edges into a small number of clusters. Specifically, we want to optimally define a probability of affiliation of each of the n nodes of the network to each of m < n clusters or aggregates. We take a dynamical perspective on this problem by analyzing the discrete-time Markov chain associated with the network and mapping it onto a Markov chain describing transitions between the clusters. We show that every such aggregated Markov chain and affiliation function can be lifted again onto the full network to define the so-called lifted transition matrix between the nodes of the network. The optimal aggregated Markov chain and affiliation function can then be determined by minimizing an appropriately defined distance between the lifted transition matrix and the transition matrix of the original chain. In general, the resulting constrained nonlinear minimization problem turns out to have continuous level sets of minimizers. We exploit this fact to devise an algorithm for identification of the optimal cluster number by choosing specific minimizers from the level sets. Numerical minimization is performed by an appropriately adapted version of restricted line search using projected gradient descent. The resulting algorithmic scheme is shown to perform well on several test examples.
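    The aggregation-and-lifting step can be sketched as follows. This is our own minimal Python/NumPy illustration on an assumed 4-node chain; the membership matrices are hand-picked for comparison, not obtained by the paper's projected-gradient minimization.

```python
import numpy as np

# Assumed toy network / chain with two weakly coupled node groups
# {0, 1} and {2, 3}; P is symmetric, so pi is uniform.
P = np.array([
    [0.88, 0.10, 0.02, 0.00],
    [0.10, 0.90, 0.00, 0.00],
    [0.02, 0.00, 0.88, 0.10],
    [0.00, 0.00, 0.10, 0.90],
])
pi = np.full(4, 0.25)

def lift(P, pi, chi):
    """Aggregate the chain onto m fuzzy clusters and lift it back.
    chi[i, k] is the probability that node i belongs to cluster k."""
    pi_c = chi.T @ pi                          # cluster weights
    B = (chi * pi[:, None]).T / pi_c[:, None]  # Bayesian back-projection
    P_c = B @ P @ chi                          # aggregated m x m chain
    return chi @ P_c @ B, P_c                  # lifted n x n matrix, P_c

# Near-hard affiliation to the two groups vs. an uninformative one.
chi_good = np.array([[0.95, 0.05], [0.95, 0.05],
                     [0.05, 0.95], [0.05, 0.95]])
chi_flat = np.full((4, 2), 0.5)

# Distance between the lifted matrix and the original transition matrix:
# the quantity that the paper's minimization drives down over chi.
d_good = np.linalg.norm(lift(P, pi, chi_good)[0] - P)
d_flat = np.linalg.norm(lift(P, pi, chi_flat)[0] - P)
print(d_good, d_flat)  # the metastable affiliation fits better
```

    Both lifted matrices are stochastic by construction; the optimization in the paper searches over all valid chi (rows on the probability simplex), which is where the continuous level sets of minimizers arise.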